15 research outputs found
Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding
Using deep learning, computer vision now rivals people at object recognition and detection, opening doors to tackle new challenges in image understanding. Among these challenges, understanding and reasoning about language-grounded visual content is of fundamental importance to advancing artificial intelligence. Recently, multiple datasets and algorithms have been created as proxy tasks towards this goal, with visual question answering (VQA) being the most widely studied. In VQA, an algorithm needs to produce an answer to a natural language question about an image. However, our survey of datasets and algorithms for VQA uncovered several sources of dataset bias and sub-optimal evaluation metrics that allowed algorithms to perform well by merely exploiting superficial statistical patterns. In this dissertation, we describe new algorithms and datasets that address these issues. We developed two new datasets and evaluation metrics that enable a more accurate measurement of a VQA model's abilities, and also expand VQA to include new abilities, such as reading text, handling out-of-vocabulary words, and understanding data visualizations. We also created new algorithms that have helped advance the state of the art for VQA, including an algorithm that surpasses humans on two different chart question answering datasets covering bar charts, line graphs, and pie charts. Finally, we provide a holistic overview of several yet-unsolved challenges, not only in VQA but in vision-and-language research at large. Despite enormous progress, we find that a robust understanding and integration of vision and language is still an elusive goal, and much of the reported progress may be misleading due to dataset bias, superficial correlations, and flaws in standard evaluation metrics. We carefully study and categorize these issues for several vision-and-language tasks and outline several possible paths towards the development of safe, robust, and trustworthy AI for language-grounded visual understanding.
TallyQA: Answering Complex Counting Questions
Most counting questions in visual question answering (VQA) datasets are simple and require no more than object detection. Here, we study algorithms for complex counting questions that involve relationships between objects, attribute identification, reasoning, and more. To do this, we created TallyQA, the world's largest dataset for open-ended counting. We propose a new algorithm for counting that uses relation networks with region proposals. Our method lets relation networks be efficiently used with high-resolution imagery. It yields state-of-the-art results compared to baseline and recent systems on both TallyQA and the HowMany-QA benchmark. (To appear in AAAI 2019. To download the dataset, please go to http://www.manojacharya.com/)
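The relation-network idea in the abstract above can be sketched numerically: treat each region proposal as an object, score every ordered pair of region features together with the question embedding, and sum the pairwise outputs before a final counting head. This is a minimal NumPy sketch, not the paper's implementation; all dimensions, the random weights, and the single-layer g and f functions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
D_REGION, D_QUESTION, D_HIDDEN = 8, 4, 16

def g_theta(o_i, o_j, q, W):
    # Pairwise relation function: scores one (region, region, question) triple.
    x = np.concatenate([o_i, o_j, q])
    return np.maximum(W @ x, 0.0)  # one ReLU layer stands in for an MLP

def relation_network(regions, q, W_g, w_f):
    # Sum g over all ordered region pairs, then map the pooled relation
    # vector to a scalar count score with a linear f.
    pair_sum = sum(g_theta(o_i, o_j, q, W_g)
                   for o_i in regions for o_j in regions)
    return float(w_f @ pair_sum)

regions = rng.normal(size=(5, D_REGION))   # 5 region-proposal feature vectors
q = rng.normal(size=D_QUESTION)            # question embedding
W_g = rng.normal(size=(D_HIDDEN, 2 * D_REGION + D_QUESTION))
w_f = rng.normal(size=D_HIDDEN)

score = relation_network(regions, q, W_g, w_f)
```

Working over a few dozen region proposals rather than a dense grid of image locations is what keeps the pairwise sum tractable for high-resolution imagery.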
Answer-Type Prediction for Visual Question Answering
Recently, algorithms for object recognition and related tasks have become sufficiently proficient that new vision tasks can now be pursued. In this paper, we build a system capable of answering open-ended text-based questions about images, a task known as Visual Question Answering (VQA). Our approach's key insight is that we can predict the form of the answer from the question. We formulate our solution in a Bayesian framework. When our approach is combined with a discriminative model, the combined model achieves state-of-the-art results on four benchmark datasets for open-ended VQA: DAQUAR, COCO-QA, The VQA Dataset, and Visual7W.
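The Bayesian combination described above can be illustrated with toy numbers: predict a distribution over answer types from the question, marginalize it into a prior over candidate answers, and multiply by the discriminative model's answer distribution before renormalizing. All probabilities below are made-up illustrative values, not numbers from the paper.

```python
import numpy as np

# Hypothetical toy setup: 2 answer types ("yes/no", "number"), 4 candidates.
answers = ["yes", "no", "2", "3"]

# P(type | question), predicted from question text (assumed values).
p_type_given_q = np.array([0.9, 0.1])            # [yes/no, number]

# P(answer | type): an answer is only plausible under its own type.
p_ans_given_type = np.array([
    [0.5, 0.5, 0.0, 0.0],    # yes/no type
    [0.0, 0.0, 0.6, 0.4],    # number type
])

# Discriminative model's P(answer | question, image) (assumed values).
p_ans_discrim = np.array([0.4, 0.2, 0.3, 0.1])

# Marginalize over the predicted answer type, then combine with the
# discriminative distribution and renormalize.
p_ans_bayes = p_type_given_q @ p_ans_given_type
combined = p_ans_bayes * p_ans_discrim
combined /= combined.sum()

best = answers[int(np.argmax(combined))]  # "yes" on these toy numbers
```

The type prior acts as a soft filter: answers of an unlikely type are suppressed even when the discriminative model scores them highly.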
A negative case analysis of visual grounding methods for VQA
Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly 'yes' when the common training answer is 'no'. Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization, and put into question the value of methods specifically designed for this dataset. We show that embarrassingly simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
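The "inverting" practice criticized above can be made concrete with a toy yes/no split: a constant answer that simply flips the majority training label already scores well on an inverted test distribution, with no visual reasoning involved. The data below is a fabricated toy illustration of the failure mode, not the actual benchmark.

```python
from collections import Counter

# Toy yes/no question type whose test label distribution is "inverted"
# relative to training (illustrative data, not from VQA-CP).
train_answers = ["no", "no", "no", "yes"]    # training split: mostly 'no'
test_answers = ["yes", "yes", "yes", "no"]   # inverted test split: mostly 'yes'

# Heuristic that exploits knowledge of the split construction: always
# answer the opposite of the most common training answer.
majority = Counter(train_answers).most_common(1)[0][0]
inverted_guess = "yes" if majority == "no" else "no"

# The constant inverted guess scores well on the inverted test split
# without looking at a single image.
accuracy = sum(inverted_guess == a for a in test_answers) / len(test_answers)
# accuracy == 0.75 on this toy split
```

This is exactly the Goodhart's-law failure the abstract describes: the metric rewards knowledge of how the benchmark was built rather than genuine generalization.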